Multi-Classifier Adversarial Optimization for Active Learning
Authors

Abstract
Active learning (AL) aims to find a better trade-off between labeling costs and model performance by consciously selecting more informative samples to label. Recently, adversarial approaches have emerged as effective solutions. Most of them leverage generative networks to align the feature distributions of labeled and unlabeled data, upon which discriminators are trained to distinguish between them. However, these methods fail to consider the relationship between unlabeled samples and decision boundaries, and their training processes are often complex and unstable. To this end, this paper proposes a novel AL method, namely multi-classifier adversarial optimization for active learning (MAOAL). MAOAL employs task-specific decision boundaries to fulfill data alignment. To achieve this, we introduce the classifier class confusion (C3) metric, which represents the discrepancy of inter-class correlations in classifier outputs. Without any additional hyper-parameters, the C3 metric further reduces the negative impacts of ambiguous samples in the processes of distribution alignment and sample selection. More concretely, the network is trained adversarially by adding two auxiliary classifiers, reducing prediction bias by minimizing the C3 loss for tighter decision boundaries and highlighting hard samples by maximizing the C3 loss. Finally, the unlabeled samples with the highest C3 values are selected for labeling. Extensive experiments demonstrate the superiority of our approach over state-of-the-art methods in terms of image classification and object detection.
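The abstract describes a classifier class confusion (C3) metric built from the inter-class correlation of two classifiers' outputs. The paper does not give the formula here, so the following is only a minimal illustrative sketch, assuming C3 can be read as the off-diagonal mass of the cross-correlation between the two classifiers' softmax predictions on a batch (high when the classifiers spread probability across different classes, i.e. a sample is ambiguous); the function name `c3_metric` and this exact formulation are assumptions, not the authors' definition.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for numerical stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def c3_metric(logits_a, logits_b):
    """Illustrative classifier class confusion (C3) score for a batch.

    Hypothetical reading of the abstract: build the (classes x classes)
    cross-correlation of the two classifiers' softmax outputs and sum its
    off-diagonal entries. Near 0 when both classifiers confidently agree
    per class; large when they assign mass to different classes.
    """
    p_a = softmax(np.asarray(logits_a, dtype=float))   # (batch, classes)
    p_b = softmax(np.asarray(logits_b, dtype=float))
    corr = p_a.T @ p_b / len(p_a)                      # inter-class correlation
    return float(corr.sum() - np.trace(corr))          # off-diagonal confusion
```

Under this reading, the adversarial game in the abstract would minimize this quantity through the feature extractor (tighter boundaries) while the auxiliary classifiers maximize it, and unlabeled samples scoring highest would be queried for labels.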
Similar references
Active Learning for Multi-Objective Optimization
In many fields one encounters the challenge of identifying, out of a pool of possible designs, those that simultaneously optimize multiple objectives. This means that usually there is not one optimal design but an entire set of Pareto-optimal ones with optimal tradeoffs in the objectives. In many applications, evaluating one design is expensive; thus, an exhaustive search for the Pareto-optimal...
Generative Adversarial Active Learning
We propose a new active learning by query synthesis approach using Generative Adversarial Networks (GAN). Different from regular active learning, the resulting algorithm adaptively synthesizes training instances for querying to increase learning speed. We generate queries according to the uncertainty principle, but our idea can work with other active learning principles. We report results from ...
Incremental Classifier Learning with Generative Adversarial Networks
In this paper, we address the incremental classifier learning problem, which suffers from catastrophic forgetting. The main reason for catastrophic forgetting is that the past data are not available during learning. Typical approaches keep some exemplars for the past classes and use distillation regularization to retain the classification capability on the past classes and balance the past and ...
Adversarial vulnerability for any classifier
Despite achieving impressive and often superhuman performance on multiple benchmarks, state-of-the-art deep networks remain highly vulnerable to perturbations: adding small, imperceptible, adversarial perturbations can lead to very high error rates. Provided the data distribution is defined using a generative model mapping latent vectors to datapoints in the distribution, we prove that no class...
Adversarial Multi-task Learning for Text Classification
Neural network models have shown their promising opportunities for multi-task learning, which focus on learning the shared layers to extract the common and task-invariant features. However, in most existing approaches, the extracted shared features are prone to be contaminated by task-specific features or the noise brought by other tasks. In this paper, we propose an adversarial multi-task lear...
Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468
DOI: https://doi.org/10.1609/aaai.v37i6.25932